darpa.2[w87,jmc]		Relevance of work

	Since much of this work is basic research, its relevance to DoD
concerns needs to be described.  DoD, especially via the Strategic
Computing program, is increasingly preparing to use expert systems.  These
expert systems are intended to embody in computer programs the knowledge and
reasoning ability of an expert in a given domain and make this capability
available to military personnel doing their jobs.  The autonomous land
vehicle, the pilot's associate and the naval battle management projects
are specific embodiments of this technology.  Our work will help identify
and overcome important limitations of present expert system technology.
A 1983 view of these limits and how they might be overcome is in
(McCarthy 1983), ``Some Expert Systems Need Common Sense'',
 included in this proposal as Appendix A.

	Here is a concise statement of our present view of these limitations.

	1. We can say that a program has {\it common sense} if it
has available to it sufficiently obvious consequences of what it
learns about the present situation and the general knowledge it has
stored in its database.  For example, a battle management program
needs to be able to react correctly to being told that the enemy is
short of ammunition for a certain weapon.

	2. For many purposes, the most important kind of common sense
knowledge deals with how the consequences of the various actions the
system might take depend on the facts of the situation.  Facts about
the consequences of other events, including the actions of other entities,
also need to be represented and used.  For example, a battle management
program needs to know the effects of ordering a ship to move from one
location to another.

	3. The representation of facts about the consequences of actions
and other events presents many problems.  Not solving them correctly
makes the programs ``brittle''.  For example, from one point of view,
moving a ship is a unitary act.  The consequence of moving the ship
is that it is in the location to which it was ordered.  However, there
are preconditions for moving the ship, e.g. it must have fuel and must
be in working order.
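
	The flavor of such a representation can be suggested in the
situation calculus notation of (McCarthy 1986).  As a sketch only,
with illustrative predicate names that belong to no existing system,
the effect of moving a ship, together with the two preconditions just
mentioned, might be written
$$hasfuel(ship,s) \land working(ship,s) \supset
at(ship,loc2,result(move(ship,loc1,loc2),s)),$$
where $result(e,s)$ denotes the situation that results when the event
$e$ occurs in the situation $s$.  Point 4 concerns what must be added
because the list of preconditions is never complete.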

	4. Non-monotonic reasoning is required for correctly handling the
consequences of actions.  (McCarthy 1986), ``Applications of
Circumscription to Formalizing Common Sense Knowledge'', explains the
importance of non-monotonic reasoning to common sense.  It is included in
the proposal as Appendix B.  See especially sections 1 and 2.

	Here's an example.  Once we think about it, it's a precondition
for moving a ship that it be possible to raise or detach its anchor
and that it not be blocked by other ships or by a barrier.  However,
not all these conditions can be listed specifically.  Instead, after
listing the specific conditions, it is necessary to add a clause that
the ship can be moved unless something else prevents it.  Recall that
the Swedes contemplated building a concrete barrier around the Soviet
submarine that had run aground.  It surely wasn't specifically in
anyone's mind or anyone's database that the absence of a concrete barrier
is a condition for moving a submarine.  Human common sense handles
such matters by non-monotonic reasoning.
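
	In the formalism of Appendix B this is accomplished by
circumscribing an {\it abnormality} predicate $ab$.  As a sketch,
again with invented predicate names,
$$\neg ab(aspect1(ship,s)) \supset movable(ship,s),$$
$$anchored(ship,s) \lor blocked(ship,s) \supset ab(aspect1(ship,s)).$$
Circumscription minimizes $ab$, so the program concludes that the
ship is movable unless some fact implies otherwise.  A new obstacle
like the concrete barrier is then handled by adding one sentence
implying $ab(aspect1(ship,s))$ rather than by rewriting the rule
about moving ships.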

	5. Computing in general, and AI in particular, is moving in
the direction of representing facts declaratively rather than embedding
them in programs.  Prolog is an example of a programming language
whose programs themselves are collections of facts.  The various AI
shells like Emycin, OPS-5, KEE and ART also represent as much information
as possible declaratively.  This makes the programs much easier to
design, debug and modify.  However, none of these systems go as far
as human use of declarative representation.
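
	The following fragment of Prolog suggests what is meant.  The
predicates are invented for illustration and are not taken from any
actual system.

	% A minimal sketch.  prevented/2 is declared dynamic so that
	% the absence of prevented facts is not an error.
	:- dynamic prevented/2.

	% Declaratively stated facts about an individual ship.
	ship(kennedy).
	hasfuel(kennedy).

	% A ship can be moved unless something is known to prevent it;
	% \+ is negation as failure, itself a non-monotonic device.
	canmove(Ship, To) :-
	        ship(Ship),
	        hasfuel(Ship),
	        \+ prevented(Ship, To).

Asserting a new fact such as prevented(kennedy, boston) changes the
program's answers without any reprogramming; point 6 below concerns
how far this can be carried.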

	6. A human can be told a new fact by someone who does not
understand how the recipient is going to use it.  While new facts
can be added to programs in any of the above systems, this has to
be done for most kinds of facts by a programmer who understands
the existing expert system.  For example, an expert system about
ships can perhaps be told about a new ship by a non-programmer who
is asked for its name, displacement and other qualities that have
been included in ship descriptions.  However, a present-day system ordinarily
could not be told the effects of a new enemy weapon except by someone
familiar with the program.  The consequence is that the users of
such a system may face an unpleasant choice between getting wrong
answers from the system and turning it off until it can be modified
and the modification debugged.

	7. As the declarative or logicist approach to AI advances,
the class of statements that can be made directly to an expert
system without reprogramming it or even understanding it in detail
increases.  Formalized non-monotonic reasoning has already increased
our ability to tell systems about new conditions that may interfere
with the plans of an agent and about how they may be overcome.

	8. Recently the non-monotonic formalisms concerned with
determining the effects of actions were challenged by the ``Yale
shooting problem''.  Lifschitz (1987) and others have developed
ways of solving the difficulty thus presented.  We plan further
development of these formalisms.
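
	Briefly and in simplified form, the axioms of that problem say
that a person is alive in the initial situation, that a gun is loaded
after the action $load$, that shooting a loaded gun kills,
$$holds(loaded,s) \supset \neg holds(alive,result(shoot,s)),$$
and that facts normally persist from one situation to the next,
$$holds(f,s) \land \neg ab(f,e,s) \supset holds(f,result(e,s)).$$
Minimizing $ab$ admits, besides the intended model in which the
victim dies after the sequence $load$, $wait$, $shoot$, an unintended
minimal model in which the gun becomes unloaded during the $wait$ and
the victim survives.  The two models minimize $ab$ in incomparable
ways, so circumscription by itself does not choose between them.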

	9. From a general point of view, non-monotonic reasoning
methods allow us to say which {\it models} of a collection of facts
are preferred, i.e. which of the possibilities allowed by the
facts are to be assumed.  Present non-monotonic formalisms make
these choices depend only on the objective circumstances.  It may
be that the choice should in some cases depend also on the system's
goals and state of knowledge.  Technically, this involves making
the {\it circumscriptions} depend on {\it intensional} as well
as {\it extensional} information.  We are exploring how to do this.
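
	For reference, the circumscription of a predicate $P$ in a
sentence $A(P,Z)$, with the predicates $Z$ allowed to vary, is the
second order sentence
$$A(P,Z) \land \forall p\,z.\,\neg(A(p,z) \land p < P),$$
where $p < P$ means that the extension of $p$ is a proper part of
that of $P$.  Its models are exactly the models of $A$ in which $P$
is minimal.  Making the preferred models depend on goals and
knowledge amounts to letting the minimization, i.e. the relation
$<$, vary with intensional information.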

	10. It has further become apparent that human information
is expressed in ways that depend heavily on context and that AI
systems will require a similar capability.  This
has been obscured in AI systems by the fact that these systems
work in single, limited contexts.  For example, in one context
moving a ship from one place to another is a unitary act, and
the system represents directly facts about its preconditions and
consequences.  From another point of view, the same action is
the result of a complex strategy, and any instance of moving the
ship is compounded from a substantial number and variety of
different actions.  The facts about moving ships need to be
represented at a variety of levels of detail.
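
	A sketch of what is wanted, again in Prolog with invented
predicates: at the coarse level the program may use a single fact,

	% Coarse context: moving is one act with one main effect.
	effect(move(Ship, _From, To), at(Ship, To)).

while at a finer level the same act decomposes into steps,

	% Finer context: the same act as a sequence of subactions.
	decomposes(move(Ship, From, To),
	           [raise_anchor(Ship), steam(Ship, From, To),
	            anchor(Ship, To)]).

The research problem is to relate such contexts formally, so that
conclusions reached at one level of detail can be used at another.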

	11. All the above problems and more come together in the
task of creating a database for common sense reasoning.  Therefore,
the creation of such a database is the major applied task we
are undertaking as part of this proposal.